Opposite Online Learning via Sequentially Integrated Stochastic Gradient Descent Estimators

Abstract

The stochastic gradient descent (SGD) algorithm has been popular in various fields of artificial intelligence, as well as serving as a prototype of online learning algorithms. This article proposes a novel and general framework of one-sided testing for streaming data based on SGD, which determines whether the unknown parameter is greater than a certain positive constant. We construct an online-updated test statistic sequentially by integrating the selected batch-specific estimator or its opposite, a scheme referred to as opposite learning. The estimators are chosen strategically according to proposed sequential tactics designed as a two-armed bandit process. Theoretical results prove the advantage of the strategy by ensuring that the distribution of the test statistic is optimal under the null hypothesis, and also supply theoretical evidence of power enhancement compared with the classical statistic. In application, the method is appealing for statistical inference because it is scalable to any model. Finally, the superior finite-sample performance is evaluated by simulation studies.
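
Although the paper's construction is general, the mechanics can be sketched on a toy mean-testing problem. In the sketch below, each batch yields an SGD estimate, its "opposite" is taken to be the reflection 2c - est about the hypothesized value c, and an epsilon-greedy rule stands in for the paper's two-armed bandit tactic. All of these concrete choices (the model, the reflection, the selection rule, the standardized statistic) are illustrative assumptions, not the authors' exact procedure; in particular, the paper's tactic is designed to keep the null distribution optimal, which this greedy placeholder does not guarantee.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_batch_estimate(batch, theta_init, lr=0.1):
    """One SGD pass over a batch for the toy model x ~ N(theta, 1),
    minimizing 0.5 * (theta - x)**2 per observation."""
    theta = theta_init
    for x in batch:
        theta -= lr * (theta - x)  # gradient of the squared loss
    return theta

def opposite_online_statistic(batches, c, eps=0.1):
    """Integrate, batch by batch, either the SGD estimate or its
    'opposite' (its reflection about c).  The epsilon-greedy arm
    choice is a placeholder for the paper's two-armed bandit tactic."""
    chosen = []
    for batch in batches:
        est = sgd_batch_estimate(batch, theta_init=c)
        opp = 2.0 * c - est                   # the "opposite" estimator
        if rng.random() < eps:                # explore: random arm
            pick = rng.choice([est, opp])
        else:                                 # exploit: arm favoring H1
            pick = max(est, opp)
        chosen.append(pick)
    chosen = np.asarray(chosen)
    se = chosen.std(ddof=1) / np.sqrt(len(chosen))
    return (chosen.mean() - c) / se           # one-sided test statistic

# ten batches streaming in, true theta = 0.3, testing H0: theta <= 0
batches = [rng.normal(0.3, 1.0, size=50) for _ in range(10)]
print("T =", opposite_online_statistic(batches, c=0.0))
```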

Similar Articles

Accelerating Stochastic Gradient Descent via Online Learning to Sample

Stochastic Gradient Descent (SGD) is one of the most widely used techniques for online optimization in machine learning. In this work, we accelerate SGD by adaptively learning how to sample the most useful training examples at each time step. First, we show that SGD can be used to learn the best possible sampling distribution of an importance sampling estimator. Second, we show that the samplin...
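
The first claim, that SGD can itself learn a sampling distribution for an importance-sampling estimator, can be illustrated with a simple stand-in: an importance-weighted SGD loop for least squares whose sampling scores track per-example gradient norms. The score-update heuristic and all constants below are assumptions made for the sketch, not the algorithm of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy least-squares problem: y = X @ w_true + noise
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)
scores = np.ones(n)                  # unnormalized sampling scores
lr, steps = 0.02, 5000

for t in range(steps):
    p = scores / scores.sum()        # current sampling distribution
    i = rng.choice(n, p=p)
    grad = (X[i] @ w - y[i]) * X[i]  # per-example least-squares gradient
    # importance weight 1/(n * p_i) keeps the update unbiased
    # for the uniform-average objective
    w -= lr * grad / (n * p[i])
    # adapt the score toward the observed gradient norm (a heuristic)
    scores[i] = 0.9 * scores[i] + 0.1 * (np.linalg.norm(grad) + 1e-8)

print("parameter error:", np.linalg.norm(w - w_true))
```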

Online Learning, Stability, and Stochastic Gradient Descent

In batch learning, stability together with existence and uniqueness of the solution corresponds to well-posedness of Empirical Risk Minimization (ERM) methods; recently, it was proved that CVloo stability is necessary and sufficient for generalization and consistency of ERM ([9]). In this note, we introduce CVon stability, which plays a similar role in online learning. We show that stochastic g...

Learning by Online Gradient Descent

We study online gradient-descent learning in multilayer networks analytically and numerically. The training is based on randomly drawn inputs and their corresponding outputs as defined by a target rule. In the thermodynamic limit we derive deterministic differential equations for the order parameters of the problem which allow an exact calculation of the evolution of the generalization error. Fi...
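
A minimal numerical version of this teacher-student setting, reduced from a multilayer network to a single unit with a tanh transfer function (both simplifications are assumptions made for brevity), looks as follows; the normalized overlap it reports is the kind of order parameter the analysis tracks.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 500                                  # input dimension
B = rng.normal(size=N)
B /= np.linalg.norm(B)                   # teacher weight vector, |B| = 1
J = 0.01 * rng.normal(size=N)            # student weights, small start
eta = 0.1                                # learning rate (scaled by 1/N)

for t in range(20 * N):                  # ~20 examples per weight
    x = rng.normal(size=N)               # randomly drawn input
    y = np.tanh(B @ x)                   # output defined by the teacher rule
    h = J @ x
    # online gradient step on the squared error 0.5 * (tanh(h) - y)^2
    J -= (eta / N) * (np.tanh(h) - y) * (1.0 - np.tanh(h) ** 2) * x

# the overlap R controls the generalization error in the
# large-N (thermodynamic) limit
R = (J @ B) / np.linalg.norm(J)
print("teacher-student overlap R =", R)
```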

Online gradient descent learning algorithm

This paper considers the least-square online gradient descent algorithm in a reproducing kernel Hilbert space (RKHS) without an explicit regularization term. We present a novel capacity independent approach to derive error bounds and convergence results for this algorithm. The essential element in our analysis is the interplay between the generalization error and a weighted cumulative error whi...

Online Gradient Descent Learning Algorithms

This paper considers the least-square online gradient descent algorithm in a reproducing kernel Hilbert space (RKHS) without explicit regularization. We present a novel capacity independent approach to derive error bounds and convergence results for this algorithm. We show that, although the algorithm does not involve an explicit RKHS regularization term, choosing the step sizes appropriately c...
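This abstract and the previous one study essentially the same unregularized procedure, which is short to write down: the iterate is a kernel expansion gaining one coefficient per example, updated by f_{t+1} = f_t - eta_t * (f_t(x_t) - y_t) * K(x_t, .). Below is a sketch with a Gaussian kernel and a decaying step size eta_t = t^{-1/2}; the kernel, the target function, and the step-size exponent are illustrative choices, not the ones required by the papers' theorems.

```python
import numpy as np

rng = np.random.default_rng(3)

def k(x, xp, sigma=0.5):
    """Gaussian kernel inducing the RKHS."""
    return np.exp(-(x - xp) ** 2 / (2 * sigma ** 2))

# stream of (x_t, y_t) from a noisy target on [0, 1]
T = 400
xs = rng.uniform(0.0, 1.0, size=T)
ys = np.sin(2 * np.pi * xs) + 0.1 * rng.normal(size=T)

# f_t = sum_i alpha_i k(c_i, .), updated without any regularization:
# f_{t+1} = f_t - eta_t * (f_t(x_t) - y_t) * k(x_t, .)
centers, alphas = [], []
for t in range(T):
    eta = 1.0 / (t + 1) ** 0.5          # decaying step size eta_t ~ t^{-1/2}
    f_xt = sum(a * k(c, xs[t]) for a, c in zip(alphas, centers))
    centers.append(xs[t])
    alphas.append(-eta * (f_xt - ys[t]))

# evaluate the learned function at a test point
x_test = 0.25
f_test = sum(a * k(c, x_test) for a, c in zip(alphas, centers))
print(f"f({x_test}) = {f_test:.3f}, target = {np.sin(2*np.pi*x_test):.3f}")
```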

Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v37i6.25886